Oral presentation

Tracer dispersion simulation using locally-mesh refined lattice Boltzmann method based on observation data

Onodera, Naoyuki; Idomura, Yasuhiro; Kawamura, Takuma; Nakayama, Hiromasa; Shimokawabe, Takashi*; Aoki, Takayuki*

no journal

Simulation of the dispersion of radioactive substances attracts strong social interest and must satisfy both speed and accuracy requirements. To perform real-time simulations on a high-resolution mesh at the scale of human living areas, including alleyways and buildings, simulation schemes that can fully exploit high computational performance are needed. In this study, we introduced a nudging-based data assimilation method and a plant canopy model into the lattice Boltzmann method (LBM), and confirmed that the accuracy of plume dispersion simulations for urban areas is improved.
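
As a rough illustration of the nudging idea mentioned in this abstract, the CUDA sketch below relaxes the macroscopic LBM velocity toward observed values at cells where observations exist. The field names, the observation mask, and the relaxation coefficient alpha are assumptions made for illustration only and are not taken from the authors' code.

#include <cuda_runtime.h>
#include <cstdio>

// Newtonian relaxation (nudging) of the macroscopic LBM velocity toward
// observed winds:  u_new = u + alpha * (u_obs - u)  at cells with observations.
// (Illustrative sketch; names and layout are assumptions.)
__global__ void nudge_velocity(float* ux, float* uy, float* uz,
                               const float* obs_ux, const float* obs_uy,
                               const float* obs_uz, const int* obs_mask,
                               float alpha, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n || obs_mask[i] == 0) return;   // skip cells without observation data
    ux[i] += alpha * (obs_ux[i] - ux[i]);
    uy[i] += alpha * (obs_uy[i] - uy[i]);
    uz[i] += alpha * (obs_uz[i] - uz[i]);
}

int main()
{
    const int n = 1 << 16;       // toy number of lattice cells
    const float alpha = 0.1f;    // hypothetical relaxation coefficient
    float *u[3], *obs[3];
    int *mask;
    for (int c = 0; c < 3; ++c) {
        cudaMallocManaged(&u[c], n * sizeof(float));
        cudaMallocManaged(&obs[c], n * sizeof(float));
    }
    cudaMallocManaged(&mask, n * sizeof(int));
    for (int i = 0; i < n; ++i) {
        for (int c = 0; c < 3; ++c) { u[c][i] = 0.0f; obs[c][i] = 1.0f; }
        mask[i] = (i % 10 == 0);  // pretend observations exist at every 10th cell
    }
    nudge_velocity<<<(n + 255) / 256, 256>>>(u[0], u[1], u[2],
                                             obs[0], obs[1], obs[2],
                                             mask, alpha, n);
    cudaDeviceSynchronize();
    printf("u_x at cell 0 after one nudging step: %f\n", u[0][0]);  // expect 0.1
    for (int c = 0; c < 3; ++c) { cudaFree(u[c]); cudaFree(obs[c]); }
    cudaFree(mask);
    return 0;
}

In a real simulation such a step would typically be applied once per time step after the LBM collision-streaming update, with alpha controlling how strongly the simulated flow is pulled toward the observations.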

Oral presentation

Study on acceleration of locally mesh-refined lattice Boltzmann simulation using GPU interconnect technology

Hasegawa, Yuta; Onodera, Naoyuki; Idomura, Yasuhiro

no journal

To reduce memory usage and accelerate data communication in the locally mesh-refined lattice Boltzmann code, we implemented an intra-node multi-GPU version using CUDA Unified Memory. In microbenchmark tests on a uniform mesh, we achieved 96.4% and 94.6% parallel efficiency on weak and strong scaling of a 3D diffusion problem, and 99.3% and 56.5% parallel efficiency on weak and strong scaling of a D3Q27 lattice Boltzmann problem, respectively. In the locally mesh-refined lattice Boltzmann code, the present method reduced memory usage by 25.5% compared with the Flat MPI implementation. However, this code achieved only 9.0% parallel efficiency on strong scaling, which was worse than that of the Flat MPI implementation.
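
As a rough illustration of the Unified Memory approach described in this abstract, the sketch below allocates a single managed lattice array shared by all GPUs in a node, enables peer access between devices, and prefetches one slice of the domain to each GPU. The lattice size, the D3Q27 layout, and the slicing scheme are illustrative assumptions, not the authors' implementation.

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int n_gpu = 0;
    cudaGetDeviceCount(&n_gpu);
    if (n_gpu < 1) { printf("no CUDA device found\n"); return 0; }

    // Example sizes only: a D3Q27 lattice on a 256^3 uniform grid.
    const size_t n_cells = 256UL * 256UL * 256UL;
    const size_t q = 27;
    float* f = nullptr;

    // A single managed allocation is visible to the host and to every GPU in
    // the node, so neighboring subdomains can be read across GPUs without
    // explicit communication buffers.
    cudaMallocManaged(&f, n_cells * q * sizeof(float));

    // Enable direct peer access between all GPU pairs that support it.
    for (int d = 0; d < n_gpu; ++d) {
        cudaSetDevice(d);
        for (int p = 0; p < n_gpu; ++p) {
            int ok = 0;
            if (p != d && cudaDeviceCanAccessPeer(&ok, d, p) == cudaSuccess && ok)
                cudaDeviceEnablePeerAccess(p, 0);
        }
    }

    // Prefetch an equal slice of the distribution functions to each GPU so
    // that pages mostly reside on the device that updates them.
    const size_t slice = n_cells * q / n_gpu;
    for (int d = 0; d < n_gpu; ++d) {
        cudaSetDevice(d);
        cudaMemPrefetchAsync(f + d * slice, slice * sizeof(float), d);
    }
    cudaDeviceSynchronize();

    printf("managed lattice shared by %d GPU(s)\n", n_gpu);
    cudaFree(f);
    return 0;
}

The general trade-off of this design is that data placement is handled by driver-level page migration rather than by explicit per-rank transfers, which simplifies the code and removes duplicated halo buffers, but can limit strong-scaling efficiency when cross-device accesses become fine-grained.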
